8851 search results (query time: 15 ms)
91.
Self-adaptive surface measurements that can reduce data redundancy and improve time efficiency are in high demand in many fields of science and technology. For this purpose, a system implementing Gaussian process (GP) adaptive sampling is developed. The non-parametric GP model is applied to reconstruct the topography and to guide the subsequent sampling position, which is determined from the inference uncertainty estimate. A criterion is proposed to terminate the GP adaptive measurement automatically, without any prior model or data of the topography. Experiments on typical surfaces validate the intelligence, adaptability, and high accuracy of the GP method, as well as the stability of the automatic iteration termination. Compared with traditional raster sampling, data redundancy is reduced and time efficiency is improved without sacrificing surface reconstruction accuracy. The proposed method can be implemented in other systems with similar measurement principles, thus benefiting surface characterization.
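The sampling loop described above (fit a GP, probe where the posterior uncertainty is largest, stop once the uncertainty falls below a threshold) can be sketched as follows. The kernel, length scale, test surface, and tolerance are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def rbf(a, b, ell=0.15, sf=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(xs, ys, xq, noise=1e-6):
    """GP posterior mean and standard deviation at query points xq."""
    K = rbf(xs, xs) + noise * np.eye(len(xs))
    Kq = rbf(xq, xs)
    mu = Kq @ np.linalg.solve(K, ys)
    v = np.linalg.solve(K, Kq.T)
    var = np.clip(rbf(xq, xq).diagonal() - np.sum(Kq * v.T, axis=1), 0, None)
    return mu, np.sqrt(var)

def adaptive_measure(profile, x_grid, n_init=4, tol=0.1, max_iter=50):
    """Sample where posterior uncertainty is largest; stop when it drops below tol."""
    xs = list(np.linspace(x_grid.min(), x_grid.max(), n_init))
    ys = [profile(x) for x in xs]
    for _ in range(max_iter):
        mu, sd = gp_posterior(np.array(xs), np.array(ys), x_grid)
        if sd.max() < tol:                 # automatic termination criterion
            break
        x_next = x_grid[np.argmax(sd)]     # next probing position
        xs.append(x_next)
        ys.append(profile(x_next))
    return np.array(xs), mu

surface = lambda x: np.sin(6 * x) + 0.3 * x   # stand-in topography
grid = np.linspace(0, 1, 200)
samples, recon = adaptive_measure(surface, grid)
```

The reconstruction uses only the adaptively chosen samples, rather than a dense raster scan over the whole grid.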
92.
Due to the limited improvement of single-image super-resolution (SR) methods in recent years, reference-based image SR (RefSR) methods, which super-resolve the low-resolution (LR) input under the guidance of similar high-resolution (HR) reference images, are emerging. There are two main challenges in RefSR: warping the reference images, and exploiting the guidance information in the warped references. For reference warping, we propose an efficient dense warping method that handles large displacements and is much faster than the traditional patch (or texture) matching strategy. For the SR process, since different reference images complement each other and have different similarities to the LR image, we further propose a similarity-based feature fusion strategy that takes advantage of the most similar reference regions. The SR process is realized by an encoder–decoder network trained with a pixel-level reconstruction loss, a degradation loss, and a feature-level perceptual loss. Extensive experiments on three benchmark datasets demonstrate that the proposed method outperforms state-of-the-art SR methods in both subjective and objective measurements.
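The similarity-based fusion idea can be illustrated with a toy example: each warped reference feature map is weighted by its per-location similarity to the LR feature. The cosine-similarity softmax below is an assumed stand-in for the paper's learned fusion, not its actual network.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature maps, per spatial location."""
    num = np.sum(a * b, axis=-1)
    den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-8
    return num / den

def similarity_fusion(lr_feat, ref_feats):
    """Fuse warped reference features, weighting each location by its
    similarity to the LR feature (softmax over references)."""
    sims = np.stack([cosine_sim(lr_feat, r) for r in ref_feats])    # (R, H, W)
    w = np.exp(sims) / np.sum(np.exp(sims), axis=0, keepdims=True)  # softmax weights
    return np.sum(w[..., None] * np.stack(ref_feats), axis=0)

rng = np.random.default_rng(0)
lr = rng.normal(size=(8, 8, 16))                   # LR feature map
refs = [lr + 0.05 * rng.normal(size=lr.shape),     # very similar reference
        rng.normal(size=lr.shape)]                 # unrelated reference
fused = similarity_fusion(lr, refs)
```

The similar reference receives higher weight everywhere, so the fused result stays close to it.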
93.
This paper presents a secure key-based (k, n) threshold cryptography scheme in which both the key and the secret data are shared among a set of participants. In this scheme, each share has some bytes missing, and these missing bytes can be recovered from a set of exactly k shares, but not from fewer than k. Each share therefore contains partial secret information, which is additionally encrypted by a DNA encoding scheme. The paper also introduces the idea of varying the information size of each share: each share carries a different percentage of the secret information, and this percentage depends strongly on the key. Key sensitivity analysis and statistical analysis demonstrate robustness against various cryptanalytic attacks, ensuring high acceptability for information security requirements.
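The underlying (k, n) threshold primitive is classically realized with Shamir's secret sharing; the sketch below shows only that primitive, omitting the paper's DNA encoding and variable share sizes.

```python
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def split(secret, k, n):
    """Split `secret` into n shares; any k of them recover it (Shamir, 1979)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, k=3, n=5)
restored = recover(shares[:3])
```

Any 3 of the 5 shares reconstruct the secret, while 2 or fewer reveal nothing about it.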
94.
As modern technologies spread, domestic refrigerators have made food storage easier; in such systems, frost formation is a fundamental problem, especially in tropical countries with hot and humid ambient conditions. This paper presents an adaptive method for defrosting the evaporator of a domestic refrigerator. In this method, the parameters considered to affect the defrost operation are the door-open time, compressor ON time, previous defrost duration, and the states of the compressor, fans, and heater before, during, and after defrost. Experiments were carried out on two top-mounted freezers as case studies. For case study 1, the adaptive defrost reduced energy consumption and Total Equivalent Warming Impact (TEWI) by about 13% and 12.5%, respectively, compared with fixed defrost cycles; for case study 2, the reductions were 5.5% and 5.2%. Furthermore, the temperature variation of the compartments in both case studies decreased relative to the corresponding base cases, preventing quality loss of the stored food during defrost. In brief, the results showed lower energy consumption, a more environmentally friendly operation, and smaller compartment temperature changes.
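A minimal sketch of how such an adaptive schedule might combine the monitored parameters to set the next defrost interval; the weighting rules, thresholds, and clamps are purely illustrative, not the paper's algorithm.

```python
def next_defrost_interval(base_hours, door_open_min, compressor_on_ratio,
                          prev_defrost_min, target_defrost_min=10.0):
    """Heuristic adaptive defrost: shorten the interval when frost-load
    indicators (door openings, compressor duty, a long previous defrost)
    are high, and lengthen it when they are low."""
    interval = base_hours
    interval *= target_defrost_min / max(prev_defrost_min, 1.0)  # long defrost => more frost
    interval *= 1.0 - min(door_open_min / 60.0, 0.5)             # door openings add humidity
    interval *= 1.5 - compressor_on_ratio                        # high duty => more condensation
    return max(4.0, min(interval, 48.0))                         # clamp to sane bounds

humid_use = next_defrost_interval(12.0, door_open_min=30.0,
                                  compressor_on_ratio=0.9, prev_defrost_min=20.0)
light_use = next_defrost_interval(12.0, door_open_min=2.0,
                                  compressor_on_ratio=0.4, prev_defrost_min=6.0)
```

Heavy, humid usage yields a much shorter interval than light usage, which is the qualitative behaviour an adaptive scheme targets over fixed cycles.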
95.
Human Computer Interaction (HCI) is a research field that aims to improve the relationship between users and interactive computer systems. A main objective of this research area is to make the user experience more pleasant and efficient, minimizing the barrier between the users' cognition of what they want to accomplish and the computer's understanding of those tasks, by means of user-friendly, useful, and usable designs. Bad HCI design is one of the main reasons behind user rejection of computer-based applications, which in turn causes productivity and economic losses in industrial environments.

In the eHealth domain, user rejection of computer-based systems is a major barrier to exploiting the full benefit of applications developed to support the treatment of diseases, and in the worst cases a poor design may cause the patient's clinical condition to deteriorate. A high level of personalisation of the system according to users' needs is therefore extremely important, making the system easy to use, contributing to its efficacy, and in turn facilitating the empowerment of the target users. Ideally, the content offered through the interactive sessions in these applications should be continuously assessed and adapted to the changing condition of the patient. Good HCI design and development can improve the acceptance of these applications, promote better adherence to the treatment, and help prevent further relapses.

In this work, we present a mechanism to provide personalised and adaptive daily interactive sessions focused on the treatment of patients with Major Depression. These sessions automatically adapt their content and length, producing personalised and varied sessions that encourage continuous, long-term use of the system. The tailored adaptation of session content is driven by decision-making processes based on: (i) clinical requirements; (ii) the patient's historical data; and (iii) the patient's current responses. We evaluated the system with two methodologies: first, a set of simulations producing different sessions from changing input conditions, to assess the adaptability and variability of the session content offered by the system; second, a study in which a set of patients used the system for 14–28 days and answered a questionnaire about the perceived adaptability and variability. Both evaluations indicated good levels of adaptability and variability in the session content with respect to the input conditions.
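A rule-based sketch of this kind of session assembly, combining clinical requirements, patient history, and the current response; the content categories, items, and rules are hypothetical, not the paper's actual protocol.

```python
import random

CATALOGUE = {
    "psychoeducation": ["sleep hygiene", "activity planning", "thought records"],
    "mood_check":      ["PHQ-9 short form", "visual mood scale"],
    "exercises":       ["breathing", "behavioural activation", "gratitude journal"],
}

def build_session(clinical_stage, history, current_mood, max_items=3):
    """Assemble a daily session from (i) clinical requirements, (ii) the
    patient's history (avoid recently shown items), (iii) the current response."""
    session = ["mood_check: " + random.choice(CATALOGUE["mood_check"])]
    if current_mood <= 3:                   # low mood: prioritise exercises
        pool = CATALOGUE["exercises"]
    elif clinical_stage == "early":
        pool = CATALOGUE["psychoeducation"]
    else:
        pool = CATALOGUE["exercises"]
    fresh = [i for i in pool if i not in history[-5:]] or pool   # variability rule
    while len(session) < max_items and fresh:
        session.append(random.choice(fresh))
        fresh = [i for i in fresh if i not in session]
    return session

s1 = build_session("early", history=["sleep hygiene"], current_mood=7)
s2 = build_session("early", history=["sleep hygiene"], current_mood=2)
```

Recently shown items are filtered out so consecutive sessions vary, while a low mood score redirects the session toward exercises regardless of stage.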
96.
In production systems, manufacturers face important decisions that affect system profit. In this paper, three of these decisions are modelled simultaneously: due date assignment, production scheduling, and outbound distribution scheduling, which are made in the sales, production planning, and transportation departments, respectively. Recently, many researchers have studied the problem of integrating these three decisions. The present paper considers the problem of minimizing the costs associated with maximum tardiness, due date assignment, and delivery on a single machine. A Mixed Integer Non-Linear Programming (MINLP) model and a Mixed Integer Programming (MIP) model are formulated. Since the problem is NP-hard, two meta-heuristic algorithms, an Adaptive Genetic Algorithm (AGA) and a Parallel Simulated Annealing algorithm (PSA), are used to solve large-scale instances. This is the first work in which the crossover and mutation operators of the AGA and the neighbourhood generation of the PSA exploit the structure of optimal solutions. We used the Taguchi method to set the parameters, design of experiments (DOE) to generate test instances, and analysis of variance together with the Friedman, Aligned Friedman, and Quade tests to analyse the results; the robustness of the algorithms is also addressed. The computational results show that the AGA performs better than the PSA.
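The simulated-annealing component can be sketched for the maximum-tardiness part of the objective on a single machine. This is a plain serial SA with a swap neighbourhood, a simplified stand-in for the paper's PSA; processing times and due dates are made up for illustration.

```python
import math
import random

def max_tardiness(seq, proc, due):
    """Maximum tardiness of a single-machine job sequence."""
    t, worst = 0, 0
    for j in seq:
        t += proc[j]
        worst = max(worst, t - due[j])
    return worst

def simulated_annealing(proc, due, t0=10.0, cooling=0.95, iters=2000, seed=1):
    """Plain SA minimizing max tardiness with a random swap neighbourhood."""
    rng = random.Random(seed)
    cur = list(range(len(proc)))
    cur_cost = max_tardiness(cur, proc, due)
    best, best_cost, T = cur[:], cur_cost, t0
    for _ in range(iters):
        i, j = rng.sample(range(len(proc)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]            # swap two jobs
        c = max_tardiness(cand, proc, due)
        if c <= cur_cost or rng.random() < math.exp((cur_cost - c) / T):
            cur, cur_cost = cand, c                     # accept (maybe uphill)
            if c < best_cost:
                best, best_cost = cand[:], c
        T = max(T * cooling, 1e-3)                      # geometric cooling
    return best, best_cost

proc = [4, 2, 7, 3, 1]
due  = [6, 4, 17, 10, 3]
order, tmax = simulated_annealing(proc, due)
```

For pure maximum tardiness, sequencing by earliest due date is optimal (here Tmax = 1), which gives a useful sanity check for the heuristic.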
97.
Subset Simulation is an adaptive simulation method that efficiently solves structural reliability problems with many random variables. The method requires sampling from conditional distributions, which is achieved through Markov Chain Monte Carlo (MCMC) algorithms. This paper discusses different MCMC algorithms proposed for Subset Simulation and introduces a novel approach for MCMC sampling in the standard normal space. Two variants of the algorithm are proposed: a basic variant, which is simpler than existing algorithms with equal accuracy and efficiency, and a more efficient variant with adaptive scaling. It is demonstrated that the proposed algorithm improves the accuracy of Subset Simulation, without the need for additional model evaluations.
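A compact sketch of Subset Simulation with conditional sampling in standard normal space: the proposal u' = rho*u + sqrt(1 - rho^2)*xi leaves the standard normal distribution invariant, so rejecting moves that exit the current intermediate domain yields the required conditional samples. The sample sizes, rho, and the limit-state function are illustrative choices, not the paper's.

```python
import numpy as np

def subset_simulation(g, dim, n=1000, p0=0.1, rho=0.8, seed=0):
    """Estimate P[g(U) >= 0] for U ~ N(0, I) via Subset Simulation."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((n, dim))
    gv = np.array([g(x) for x in u])
    prob = 1.0
    for _ in range(20):                              # at most 20 levels
        order = np.argsort(gv)[::-1]                 # sort by g, descending
        nc = int(p0 * n)
        thresh = gv[order[nc - 1]]                   # intermediate threshold
        if thresh >= 0:                              # failure level reached
            return prob * float(np.mean(gv >= 0))
        prob *= p0
        new_u, new_g = [], []
        for s in order[:nc]:                         # one Markov chain per seed
            cu, cg = u[s], gv[s]
            for _ in range(n // nc):
                cand = rho * cu + np.sqrt(1 - rho**2) * rng.standard_normal(dim)
                gc = g(cand)
                if gc >= thresh:                     # stay inside the domain
                    cu, cg = cand, gc
                new_u.append(cu)
                new_g.append(cg)
        u, gv = np.array(new_u), np.array(new_g)
    return prob

g = lambda x: x[0] - 3.0      # failure event: first coordinate >= 3
pf = subset_simulation(g, dim=2)
```

For this linear limit state the exact probability is 1 - Phi(3), about 1.35e-3, so crude Monte Carlo with the same 1000 samples per level would rarely see a single failure.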
98.
In this paper, adaptive robust control (ARC) of fully-constrained cable-driven parallel robots is studied in detail. Since the kinematic and dynamic models of the robot are partly unknown in practice, an adaptive robust sliding mode controller is proposed based on adaptation of the upper bound of the uncertainties. This approach requires neither prior knowledge of the uncertainty upper bounds nor a linear regression form of the kinematic and dynamic models. Moreover, to ensure that all cables remain in tension, the proposed control algorithm incorporates the internal force concept in its structure. The proposed controller not only keeps all cables under tension over the whole workspace of the robot, but is also chattering-free, computationally simple, and does not require measurement of the end-effector acceleration. The stability of the closed-loop system is analysed by Lyapunov's second method, and it is shown that the tracking error remains uniformly ultimately bounded (UUB). Finally, the effectiveness of the proposed control algorithm is examined through experiments on a planar cable-driven parallel robot, which show that the controller provides suitable tracking performance in practice.
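The adaptive-bound idea can be illustrated on a one-degree-of-freedom plant: the switching gain grows with the sliding variable until it dominates the (unknown) disturbance bound, and a boundary layer replaces the sign function to avoid chattering. The plant, gains, and disturbance below are illustrative, not the paper's robot model.

```python
import numpy as np

def simulate(T=10.0, dt=1e-3, lam=5.0, gamma=20.0, eps=0.05):
    """1-DOF adaptive robust sliding-mode sketch on a unit-mass plant."""
    x, dx, k = 0.0, 0.0, 0.0
    errs = []
    for step in range(int(T / dt)):
        t = step * dt
        xd, dxd, ddxd = np.sin(t), np.cos(t), -np.sin(t)   # reference trajectory
        e, de = x - xd, dx - dxd
        s = de + lam * e                                   # sliding variable
        sat = np.clip(s / eps, -1.0, 1.0)                  # boundary layer, no chattering
        k += gamma * abs(s) * dt                           # adapt the bound estimate
        u = ddxd - lam * de - k * sat                      # control law
        d = 2.0 * np.sign(np.sin(3 * t))                   # unknown bounded disturbance
        ddx = u + d                                        # unit-mass dynamics
        dx += ddx * dt
        x += dx * dt
        errs.append(abs(e))
    return np.array(errs), k

errs, k_final = simulate()
```

No disturbance bound is supplied to the controller: the gain k grows from zero until the sliding variable is confined to the boundary layer, after which the tracking error stays small (uniformly ultimately bounded behaviour).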
99.
Histogram equalization (HE) has proved to be a simple and effective technique for contrast enhancement of digital images. However, it does not preserve the brightness and natural appearance of images, which is a major drawback. To overcome this limitation, several Bi- and Multi-HE methods have been proposed. Although Bi-HE methods significantly enhance contrast and may preserve brightness, they do not preserve natural appearance, since they suffer from intensity saturation. Multi-HE methods further maintain brightness and natural appearance, but at the cost of contrast enhancement. In this paper, two novel Multi-HE methods are proposed that enhance the contrast of natural images while preserving their brightness and natural appearance. The technique decomposes the histogram of the input image into multiple segments, using mean or median values as thresholds. Narrow-range segments are identified and allocated the full dynamic range before HE is applied to each segment independently. Finally, the combined equalized histogram is normalized to avoid intensity saturation and uneven distribution of bins. Simulation results on a variety of test images (120 images) show that the proposed methods enhance contrast while preserving brightness and natural appearance, and outperform contemporary methods both qualitatively and quantitatively. The statistical consistency of the results has been verified with the ANOVA statistical tool.
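The mean-split special case (two segments, each equalized into its own sub-range, so the overall brightness is roughly preserved) can be sketched as follows; the full method's narrow-segment detection and final normalisation are omitted.

```python
import numpy as np

def equalize_segment(img, mask, lo, hi):
    """Histogram-equalize the pixels selected by `mask` into the range [lo, hi]."""
    vals = img[mask]
    hist, _ = np.histogram(vals, bins=256, range=(0, 256))
    cdf = hist.cumsum() / max(vals.size, 1)
    lut = lo + cdf * (hi - lo)                 # per-intensity lookup table
    out = np.zeros_like(img, dtype=np.float64)
    out[mask] = lut[img[mask]]
    return out

def mean_split_he(img):
    """Two-segment HE with the image mean as threshold: the lower segment is
    equalized into [0, mean], the upper into [mean + 1, 255]."""
    m = int(img.mean())
    low, high = img <= m, img > m
    return (equalize_segment(img, low, 0, m) +
            equalize_segment(img, high, m + 1, 255)).astype(np.uint8)

rng = np.random.default_rng(3)
image = rng.integers(80, 170, size=(64, 64)).astype(np.uint8)  # low-contrast input
enhanced = mean_split_he(image)
```

Each half of the histogram stays on its own side of the mean, which is what keeps the global brightness close to the input while the dynamic range is stretched.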
100.
In this paper we combine video compression and modern image processing methods. We construct novel iterative filter methods for prediction signals based on Partial Differential Equation (PDE) methods. The mathematical framework of the employed diffusion filter class is given and some desirable properties are stated. In particular, two types of diffusion filters are constructed: a uniform diffusion filter using a fixed filter mask, and a signal-adaptive diffusion filter that incorporates the structures of the underlying prediction signal. The latter has the advantage of not attenuating existing edges, while the uniform filter is less complex. The filters are embedded into software based on HEVC with the additional QTBT (Quadtree plus Binary Tree) and MTT (Multi-Type-Tree) block structures. In this setting, several measures to reduce the coding complexity of the tool are introduced, discussed, and tested thoroughly; the coding complexity is reduced by up to 70% while over 80% of the gain is maintained. Overall, the diffusion filter method achieves average bitrate savings of 2.27% for Random Access, with an average encoder runtime of 119% and decoder runtime of 117% relative to the reference. For individual test sequences, savings of 7.36% for Random Access are achieved.
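Both filter types can be illustrated in one dimension: the uniform filter iterates a fixed diffusion mask, while the adaptive variant shrinks the diffusivity near large gradients so edges survive. The edge-stopping function below is a Perona-Malik-style choice assumed for illustration, not the paper's exact filter.

```python
import numpy as np

def uniform_diffusion(signal, steps=10, tau=0.2):
    """Iterative uniform (linear) diffusion: u <- u + tau * laplacian(u),
    i.e. repeated application of a fixed filter mask."""
    u = signal.astype(np.float64).copy()
    for _ in range(steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u   # discrete Laplacian (periodic)
        u += tau * lap
    return u

def signal_adaptive_diffusion(signal, steps=10, tau=0.2, k=5.0):
    """Adaptive variant: the diffusivity shrinks near large gradients,
    so existing edges are not attenuated."""
    u = signal.astype(np.float64).copy()
    for _ in range(steps):
        grad = np.roll(u, -1) - u                      # forward differences
        g = 1.0 / (1.0 + (grad / k) ** 2)              # edge-stopping diffusivity
        flux = g * grad
        u += tau * (flux - np.roll(flux, 1))           # divergence of the flux
    return u

edge = np.concatenate([np.zeros(32), 100 * np.ones(32)])   # stand-in prediction signal
noisy = edge + np.random.default_rng(7).normal(0, 2, size=64)
smooth_u = uniform_diffusion(noisy)
smooth_a = signal_adaptive_diffusion(noisy)
```

On this test signal the uniform filter blurs the step while the adaptive filter smooths the flat regions but keeps the jump between samples 31 and 32 nearly intact.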
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号